Detection of dependence patterns with delay
The Unitary Events (UE) method is a popular and efficient method, used over the
last decade to detect dependence patterns in the joint spiking activity of
simultaneously recorded neurons. The original method is based on binned
coincidence counts \citep{Grun1996} and can be applied to two or more
simultaneously recorded neurons. Among the improvements to the method, a
transposition to the continuous-time framework has recently been proposed in
\citep{muino2014frequent} and fully investigated in \citep{MTGAUE} for two
neurons. The goal of the present paper is to extend this study to more than two
neurons. The main result is the determination of the limit distribution of the
coincidence count, which leads to the construction of an independence test
between neurons. Finally, we propose a multiple testing procedure via a
Benjamini-Hochberg approach \citep{Benjamini1995}. All the theoretical
results are illustrated by a simulation study and compared to the UE method
proposed in \citep{Grun2002}. Furthermore, our method is applied to real data.
The \emph{X}-Alter Algorithm: A Parameter-Free Method of Unsupervised Clustering
Using quantization techniques, Laloë (2010) defined a new clustering algorithm called Alter. This L1-based algorithm is shown to be convergent but suffers from two major flaws: the number of clusters, K, must be supplied by the user, and the computational cost is high. This article adapts the X-means algorithm (Pelleg & Moore, 2000) to solve both problems.
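To make the computational-cost flaw concrete, here is a minimal sketch (not the paper's code; function names are illustrative) of an Alter-type L1 criterion, where the K centers are restricted to the data points and selected by exhaustive search, hence the exponential cost that the X-means coupling is designed to avoid:

```python
import itertools
import numpy as np

def l1_distortion(X, centers):
    """Empirical L1 distortion: mean L1 distance from each point
    to its nearest center."""
    d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # pairwise L1
    return d.min(axis=1).mean()

def alter_like(X, k):
    """Alter-type exhaustive search: try every k-subset of data points
    as candidate centers and keep the one minimising the L1 distortion.
    The combinatorial cost is the flaw X-Alter addresses."""
    best_idx, best_val = None, np.inf
    for idx in itertools.combinations(range(len(X)), k):
        val = l1_distortion(X, X[list(idx)])
        if val < best_val:
            best_idx, best_val = idx, val
    return np.array(best_idx), best_val
```

On two well-separated groups of points, the search picks one center in each group.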
Plug-in estimation of level sets in a non-compact setting with applications in multivariate risk theory
This paper deals with the problem of estimating the level sets of an unknown distribution function. A plug-in approach is followed: given a consistent estimator of this function, we estimate its level sets by the level sets of the estimator. In our setting, no compactness property is a priori required for the level sets to be estimated. We state consistency results with respect to the Hausdorff distance and the volume of the symmetric difference. Our results are motivated by applications in multivariate risk theory. In this sense, we also present simulated and real examples which illustrate our theoretical results.
Keywords: level sets; distribution function; plug-in estimation; Hausdorff distance; Conditional Tail Expectation
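As an illustration of the plug-in idea (a minimal sketch, not the paper's estimator; all names and the choice of the bivariate empirical distribution function are assumptions), one can estimate the c-level set of an unknown distribution function by the points where its empirical counterpart exceeds c:

```python
import numpy as np

def empirical_cdf(sample):
    """Return the multivariate empirical distribution function of the sample."""
    def F_n(points):
        # F_n(x) = proportion of sample points componentwise <= x
        return (sample[None, :, :] <= points[:, None, :]).all(axis=2).mean(axis=1)
    return F_n

def plugin_level_set(sample, grid, c):
    """Plug-in estimator: the c-level set {x : F(x) >= c} is estimated
    by the grid points where the empirical CDF F_n exceeds c."""
    F_n = empirical_cdf(sample)
    return grid[F_n(grid) >= c]
```

With a sample of four points on the diagonal of the plane and c = 0.5, only the grid points dominating at least half of the sample are retained.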
Estimation of extreme multivariate expectiles with functional covariates
The present article is devoted to the semi-parametric estimation of
multivariate expectiles at extreme levels. The considered multivariate risk
measures also allow conditioning with respect to a functional covariate
belonging to an infinite-dimensional space. Using the first-order optimality
condition, we interpret these expectiles as solutions of a multidimensional
nonlinear optimization problem. The inference is then based on a
gradient-descent minimization algorithm, coupled with consistent kernel
estimators of the key statistical quantities, such as conditional quantiles,
the conditional tail index and the conditional tail dependence functions. The
method is valid for equivalently heavy-tailed marginals under a multivariate
regular variation condition on the underlying unknown random vector with
arbitrary dependence structure. Our main result establishes the consistency in
probability of the approximate solution vectors, together with a rate of
convergence. This allows us to estimate the global computational cost of the
whole procedure as a function of the sample size.
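The combination of a first-order optimality condition and gradient descent can be illustrated in the simplest univariate, unconditional case (a sketch under those simplifying assumptions, not the paper's multivariate functional method): the tau-expectile is the zero of the gradient of the asymmetric squared loss.

```python
import numpy as np

def expectile(y, tau, lr=0.1, n_iter=2000):
    """Gradient-descent solver for the tau-expectile of a sample,
    i.e. the minimiser of the asymmetric squared loss
    L(e) = mean(|tau - 1{y <= e}| * (y - e)^2)."""
    e = np.mean(y)  # tau = 0.5 gives the mean, a natural starting point
    for _ in range(n_iter):
        w = np.where(y <= e, 1.0 - tau, tau)  # asymmetric weights
        grad = -2.0 * np.mean(w * (y - e))    # dL/de
        e -= lr * grad
    return e
```

For the sample {1, 2, 3, 4}, the 0.5-expectile is the mean 2.5, and the 0.9-expectile solves the weighted first-order condition, giving 3.5.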
Retarded versus time-nonlocal quantum kinetic equations
The finite duration of collisions in fermionic systems, as expressed by the
retardation time in non-Markovian Levinson-type kinetic equations, is discussed
in the quasiclassical limit. We separate the individual contributions included
in the memory effect, resulting in (i) off-shell tails of the Wigner
distribution, (ii) renormalization of the scattering rates, (iii)
renormalization of the single-particle energy, (iv) collision delay, and (v)
related non-local corrections to the scattering integral. In this way we
transform the Levinson equation into the Landau-Silin equation extended by the
non-local corrections known from the theory of dense gases. The derived
nonlocal kinetic equation unifies the Landau theory of quasiparticle transport
with the classical kinetic theory of dense gases. The space-time symmetry is
discussed versus particle-hole symmetry, and a solution is proposed which
transforms these two exclusive pictures into each other.
On some problems in supervised and unsupervised learning
The goal of this thesis is to contribute to the domain of statistical learning, in particular by developing methods that can handle functional data. In the first part, we develop a nearest neighbor approach for functional regression. In the second, we study the properties of a quantization method in infinite-dimensional spaces; we then apply this method to a behavioral study of schools of anchovies. The last part is dedicated to the problem of estimating level sets of the regression function in a multivariate setting.
A k-nearest neighbor approach for functional regression
Let (X, Y) be a random pair taking values in H × R, where H is an infinite-dimensional separable Hilbert space. We establish weak consistency of a nearest neighbor-type estimator of the regression function of Y on X, based on independent observations of the pair (X, Y). As a general strategy, we propose to reduce the infinite dimension of H by considering only the first d coefficients of an expansion of X in an orthonormal system of H, and then to perform k-nearest neighbor regression in R^d. Both the dimension and the number of neighbors are automatically selected from the observations using a simple data-dependent splitting device.
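The projection-then-kNN strategy described above can be sketched as follows (a minimal illustration, not the paper's code: the discretized cosine basis, the Riemann-sum inner products, and all names are assumptions):

```python
import numpy as np

def basis_coefficients(curves, d, t):
    """Project each discretised curve on the first d elements of an
    orthonormal cosine basis of L2[0, 1], approximating the inner
    products <x, b_j> by Riemann sums on the grid t."""
    basis = [np.ones_like(t)] + [np.sqrt(2) * np.cos(np.pi * j * t)
                                 for j in range(1, d)]
    dt = t[1] - t[0]
    return np.array([[np.sum(x * b) * dt for b in basis] for x in curves])

def knn_regress(X_train, y_train, x_new, k):
    """Plain k-nearest-neighbor regression in R^d: average the responses
    of the k training points closest to x_new."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_train[nearest].mean()
```

On constant curves x_a(t) = a with responses y = a, the coefficient vectors are proportional to a, so the 1-nearest-neighbor prediction for a new constant curve recovers its level.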
Practical acceleration of the Alter algorithm: an efficient tool for L1 clustering
We address the problem of building an efficient clustering algorithm that computes the number of clusters k by itself. To this end, we build on the Alter algorithm of Laloë (2009), which yields a convergent clustering but, on the one hand, does not compute the number of groups and, on the other hand, takes considerable time as soon as this number of groups exceeds 2. To remedy these two drawbacks, we couple the Alter algorithm with the X-means algorithm of Pelleg and Moore (2000). We also add a merging step once the clustering step is finished, in order to correct possible errors. The resulting algorithm provides an efficient estimate of the number of clusters as well as a good clustering.